2D low-dose single-slice abdominal computed tomography (CT) enables direct measurement of body composition, which is essential for quantitatively characterizing the relationship between health and aging. However, longitudinal analysis of body composition changes using 2D abdominal slices is challenging because of the positional variance between longitudinal slices acquired in different years. To reduce the positional variance, we extend a conditional generative model into our C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a slice at a defined vertebral level by estimating structural changes in the latent space. Experiments on 1,170 subjects from an in-house dataset and 50 subjects from the BTCV MICCAI challenge show that our model can generate high-quality images in terms of realism and similarity. External experiments on 20 subjects from the Baltimore Longitudinal Study of Aging (BLSA) dataset, which contains longitudinal single abdominal slices, validate that our method can harmonize the positional variance across slices with respect to muscle and visceral fat area. Our approach provides a promising direction for mapping slices from different vertebral levels to a target slice, reducing the positional variance in single-slice longitudinal analysis. The source code is available at: https://github.com/masilab/c-slicegen.
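As a rough illustration of the conditional generation idea, a minimal PyTorch sketch is shown below; the VAE-style layout, module names, toy 32x32 input size, and layer widths are assumptions for illustration, not the authors' released C-SliceGen code.

```python
# Hypothetical sketch of a conditional slice generator (not the authors' C-SliceGen code).
# An arbitrary axial slice conditions the latent code that decodes a target vertebral-level slice.
import torch
import torch.nn as nn

class CondSliceGen(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder for the conditioning slice (arbitrary vertebral level).
        self.cond_enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        # Decoder maps the latent code to a slice at the target vertebral level.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, cond_slice):
        h = self.cond_enc(cond_slice)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

# Usage: generate a target-level slice from a toy 32x32 conditioning slice.
model = CondSliceGen()
target_hat, mu, logvar = model(torch.randn(2, 1, 32, 32))
```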
Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. The transformer reformats the image into separate patches and realizes global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and losing it can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. Inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, comprising anatomies of 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidney, and inter-connected kidney tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs the whole brain segmentation task, covering the complete set of 133 tissue-class ROIs, in a single network, outperforming the prior state-of-the-art method SLANT27, which ensembles 27 network tiles; our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
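To make the hierarchical, locally communicating design concrete, here is a minimal PyTorch sketch of block-wise self-attention followed by aggregation of adjacent blocks; the module, dimensions, and merge rule are illustrative assumptions, not the UNesT implementation.

```python
# Illustrative sketch of local-then-hierarchical patch communication (not the UNesT code).
import torch
import torch.nn as nn

class NestedBlockAttention(nn.Module):
    """Self-attention runs independently inside each block of spatially adjacent patch
    tokens; neighbouring blocks are then merged so deeper levels see a wider context."""
    def __init__(self, dim=64, heads=4, block_size=16, merge=2):
        super().__init__()
        self.block_size, self.merge = block_size, merge
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.Linear(dim * merge, dim)             # aggregate `merge` adjacent blocks

    def forward(self, tokens):                               # tokens: (B, N, dim)
        b, n, d = tokens.shape
        blocks = tokens.reshape(b * n // self.block_size, self.block_size, d)
        out, _ = self.attn(blocks, blocks, blocks)           # local communication only
        out = out.reshape(b, n // self.block_size, self.block_size, d)
        # merge pairs of adjacent blocks by token-wise concatenation + projection
        g = n // (self.block_size * self.merge)
        out = out.reshape(b, g, self.merge, self.block_size, d)
        out = out.permute(0, 1, 3, 2, 4).reshape(b, g * self.block_size, self.merge * d)
        return self.pool(out)                                # (B, N / merge, dim)

x = torch.randn(2, 128, 64)                                  # 128 patch tokens per volume
print(NestedBlockAttention()(x).shape)                       # torch.Size([2, 64, 64])
```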
Developing semi-supervised task-oriented dialog (TOD) systems by leveraging unlabeled dialog data has attracted increasing interest. For semi-supervised learning of latent-state TOD models, variational learning is often used, but it suffers from the annoyingly high variance of gradients propagated through discrete latent variables and from the drawback of indirectly optimizing the target log-likelihood. Recently, an alternative algorithm, called joint stochastic approximation (JSA), has emerged for learning discrete latent variable models with impressive performance. In this paper, we propose to apply JSA to semi-supervised learning of latent-state TOD models, which is referred to as JSA-TOD. To the best of our knowledge, JSA-TOD represents the first work in developing JSA-based semi-supervised learning of discrete latent variable conditional models for such long sequential generation problems as TOD systems. Extensive experiments show that JSA-TOD significantly outperforms its variational learning counterpart. Remarkably, semi-supervised JSA-TOD using 20% labels performs close to the fully supervised baseline on MultiWOZ2.1.
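A schematic of how one JSA training step could look is sketched below, assuming callables for the generative model log p(x, h), the inference model log q(h | x), and a sampler from q; this is an illustrative reading of the algorithm, not the JSA-TOD code.

```python
# Rough sketch of one joint stochastic approximation (JSA) step for a discrete latent
# variable model (illustrative only, not the JSA-TOD code). The inference model q(h | x)
# acts as the proposal of a Metropolis-independence sampler targeting p(h | x).
import torch

def jsa_step(logp_joint, logq_posterior, sample_q, x, h_prev, opt_p, opt_q):
    """One Metropolis-independence move followed by gradient updates at the accepted sample."""
    with torch.no_grad():
        h_prop = sample_q(x)                                   # propose h' ~ q(h | x)
        # acceptance ratio of an independence sampler targeting p(h | x)
        log_alpha = (logp_joint(x, h_prop) - logq_posterior(x, h_prop)) \
                  - (logp_joint(x, h_prev) - logq_posterior(x, h_prev))
        accept = torch.rand(()) < log_alpha.exp().clamp(max=1.0)
    h = h_prop if accept else h_prev

    # Stochastic-approximation updates: maximise log p(x, h) and log q(h | x) at the
    # accepted discrete sample, so no gradient needs to flow through h itself.
    loss = -(logp_joint(x, h) + logq_posterior(x, h))
    opt_p.zero_grad(); opt_q.zero_grad()
    loss.backward()
    opt_p.step(); opt_q.step()
    return h
```

Because gradients are taken only at the accepted sample, this avoids the high-variance gradient estimators needed when backpropagating through discrete latent variables in variational learning.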
We consider the problem of autonomous channel access (AutoCA), in which a group of terminals tries to discover, in a distributed fashion, a communication strategy with an access point (AP) over a shared wireless channel. A practical challenge for AutoCA is the hidden-terminal problem, caused by the irregular topology and the limited communication ranges of the terminals; it is notorious in wireless networks for degrading throughput and delay performance. To meet this challenge, this paper presents a new multi-agent deep reinforcement learning paradigm, dubbed MADRL-HT, tailored for AutoCA in the presence of hidden terminals. MADRL-HT exploits topological insights and transforms the observation space of each terminal into a scalable form that is independent of the number of terminals. To compensate for partial observability, we propose a look-back mechanism so that terminals can infer the behaviors of their hidden terminals from the carrier-sensed channel states as well as feedback from the AP. A window-based global reward function is proposed, whereby terminals are instructed to balance their transmission opportunities during learning so that the system throughput is maximized. Extensive numerical experiments verify the superior performance of our solution benchmarked against the legacy carrier-sense multiple access with collision avoidance (CSMA/CA) scheme.
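The window-based global reward can be pictured with a toy sketch like the following, where throughput over a recent window is rewarded and imbalance in per-terminal successes is penalized; the exact weighting is an assumption for illustration, not the MADRL-HT definition.

```python
# Toy sketch of a window-based global reward that rewards throughput while penalising
# imbalance in per-terminal successes (an illustrative assumption, not the exact MADRL-HT reward).
from collections import deque

def window_reward(success_history, window=50, fairness_weight=1.0):
    """success_history: per-terminal deques of 0/1 success flags over recent slots."""
    recent = [sum(list(h)[-window:]) for h in success_history]
    throughput = sum(recent) / (window * len(recent))            # fraction of useful slots
    mean = sum(recent) / len(recent)
    imbalance = sum(abs(r - mean) for r in recent) / (len(recent) * window)
    return throughput - fairness_weight * imbalance

# Example: terminal 0 dominates the channel, so the shared reward is discounted.
hist = [deque([1] * 40), deque([1] * 5), deque([1] * 5)]
print(round(window_reward(hist), 3))
```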
Medical image segmentation, i.e., computing voxelwise semantic masks, is a fundamental yet challenging task. To increase the ability of encoder-decoder neural networks to perform this task on large clinical cohorts, contrastive learning provides an opportunity to stabilize model initialization and enhance the encoder without the need for labels. However, multiple target objects with different semantic meanings may exist in a single image, which makes it problematic to adapt traditional contrastive learning methods from the prevailing "image-level classification" setting to "pixel-level segmentation". In this paper, we propose a simple semantic-aware contrastive learning approach that leverages attention masks to advance multi-object semantic segmentation. Briefly, we embed different semantic objects into different clusters rather than using the conventional image-level embeddings. We evaluate the proposed method on a multi-organ medical image segmentation task with in-house data and the MICCAI Challenge 2015 BTCV dataset. Compared with current state-of-the-art training strategies, the proposed pipeline yields substantial improvements of 5.53% and 6.09% in Dice score on the two medical image segmentation cohorts, respectively (p-value < 0.01). The performance of the proposed method is further evaluated on the PASCAL VOC 2012 dataset, achieving a substantial improvement of 2.75% in mIoU (p-value < 0.01).
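A minimal sketch of the semantic-aware contrastive idea is given below: pixel features are pooled per object with attention masks, and the pooled embeddings are contrasted so that objects sharing a semantic label cluster together; the function and exact loss form are assumptions, not the authors' implementation.

```python
# Illustrative sketch: pool pixel features per semantic object with attention masks, then
# contrast the pooled embeddings so objects sharing a label form one cluster.
# (Names and the SupCon-style loss are assumptions, not the authors' code.)
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(feats, masks, labels, tau=0.1):
    """feats: (B, C, H, W) encoder features, masks: (B, K, H, W) soft attention masks,
    labels: (B, K) semantic class per mask."""
    b, c, h, w = feats.shape
    k = masks.shape[1]
    # masked average pooling -> one embedding per semantic object
    emb = torch.einsum('bchw,bkhw->bkc', feats, masks) / (masks.sum((-2, -1)).unsqueeze(-1) + 1e-6)
    emb = F.normalize(emb.reshape(b * k, c), dim=-1)
    lab = labels.reshape(b * k)
    sim = emb @ emb.t() / tau                                     # (BK, BK) similarities
    same = (lab[:, None] == lab[None, :]).float()
    same.fill_diagonal_(0)                                        # exclude self-pairs as positives
    logits = sim - torch.eye(len(lab), device=sim.device) * 1e9   # mask self in the softmax
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    pos_per_row = same.sum(1).clamp(min=1)
    return -(same * log_prob).sum(1).div(pos_per_row).mean()

loss = semantic_contrastive_loss(torch.randn(2, 32, 16, 16),
                                 torch.rand(2, 4, 16, 16),
                                 torch.randint(0, 3, (2, 4)))
```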
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
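One way to picture the CLIP-driven design is a controller that turns each class's text embedding into the parameters of a per-class segmentation head, so that adding a class only requires a new text embedding; the sketch below is an interpretation with assumed names and sizes, and the random tensor stands in for a real CLIP text encoder output.

```python
# Minimal sketch of text embeddings conditioning per-class segmentation heads
# (an interpretation of the idea, not the released Universal Model code).
import torch
import torch.nn as nn

class TextDrivenSegHead(nn.Module):
    def __init__(self, feat_dim=64, text_dim=512):
        super().__init__()
        # A controller maps each class's text embedding to the weights of a 1x1 conv,
        # so extending to a new class only requires its text embedding.
        self.controller = nn.Linear(text_dim, feat_dim + 1)    # per-class weights + bias

    def forward(self, feats, text_emb):                        # feats: (B,C,H,W), text: (K,512)
        params = self.controller(text_emb)                     # (K, C+1)
        w, b = params[:, :-1], params[:, -1]                   # per-class conv weight / bias
        return torch.einsum('bchw,kc->bkhw', feats, w) + b[None, :, None, None]

head = TextDrivenSegHead()
text_emb = torch.randn(31, 512)                                # e.g. 25 organs + 6 tumour types
logits = head(torch.randn(1, 64, 24, 24), text_emb)            # (1, 31, 24, 24) class logits
```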
Recent investigations on rotation invariance for 3D point clouds have been devoted to devising rotation-invariant feature descriptors or learning canonical spaces where objects are semantically aligned. However, learning frameworks for achieving invariance have seldom been examined. In this work, we review rotation invariance in terms of point cloud registration and propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration. We first encode shape descriptors constructed with respect to reference frames defined over different scales, e.g., local patches and global topology, to generate rotation-invariant latent shape codes. Within the integration stage, we propose the Aligned Integration Transformer to produce a discriminative feature representation by integrating point-wise self- and cross-relations established within the shape codes. Meanwhile, we adopt rigid transformations between reference frames to align the shape codes for feature consistency across different scales. Finally, the deep integrated feature is registered to both rotation-invariant shape codes to maximize feature similarities, such that rotation invariance of the integrated feature is preserved and shared semantic information is implicitly extracted from the shape codes. Experimental results on 3D shape classification, part segmentation, and retrieval tasks prove the feasibility of our work. Our project page is released at: https://rotation3d.github.io/.
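The rotation-invariant shape encoding stage can be illustrated with a toy example: each local neighbourhood is re-expressed in a reference frame derived from its own principal axes, so a global rotation leaves the code unchanged up to axis signs. This is a generic construction for illustration, not the paper's encoder.

```python
# Toy sketch of a rotation-invariant local shape code: express each neighbourhood in a
# reference frame derived from its own SVD/PCA axes, so a global rotation of the cloud
# leaves the descriptor unchanged up to per-axis sign (illustrative, not the paper's encoder).
import torch

def rotation_invariant_patch_code(points, k=16):
    """points: (N, 3). Returns per-point neighbour coordinates in a local PCA frame."""
    dists = torch.cdist(points, points)                     # (N, N) pairwise distances
    idx = dists.topk(k, largest=False).indices              # k nearest neighbours per point
    nbrs = points[idx] - points[:, None, :]                 # (N, k, 3), centred on each point
    # The PCA axes of each neighbourhood define a frame that rotates with the cloud.
    _, _, vt = torch.linalg.svd(nbrs, full_matrices=False)  # vt: (N, 3, 3)
    return torch.einsum('nkc,ndc->nkd', nbrs, vt)           # coordinates in the local frame

pts = torch.randn(128, 3)
rot, _ = torch.linalg.qr(torch.randn(3, 3))                 # random orthogonal transform
code_a = rotation_invariant_patch_code(pts)
code_b = rotation_invariant_patch_code(pts @ rot.t())
print(torch.allclose(code_a.abs(), code_b.abs(), atol=1e-3))  # expected: True (up to axis sign)
```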
With the attention mechanism, transformers achieve significant empirical successes. Despite the intuitive understanding that transformers perform relational inference over long sequences to produce desirable representations, we lack a rigorous theory on how the attention mechanism achieves it. In particular, several intriguing questions remain open: (a) What makes a desirable representation? (b) How does the attention mechanism infer the desirable representation within the forward pass? (c) How does a pretraining procedure learn to infer the desirable representation through the backward pass? We observe that, as is the case in BERT and ViT, input tokens are often exchangeable since they already include positional encodings. The notion of exchangeability induces a latent variable model that is invariant to input sizes, which enables our theoretical analysis.
- To answer (a) on representation, we establish the existence of a sufficient and minimal representation of input tokens. In particular, such a representation instantiates the posterior distribution of the latent variable given input tokens, which plays a central role in predicting output labels and solving downstream tasks.
- To answer (b) on inference, we prove that attention with the desired parameter infers the latent posterior up to an approximation error, which is decreasing in input sizes. In detail, we quantify how attention approximates the conditional mean of the value given the key, which characterizes how it performs relational inference over long sequences.
- To answer (c) on learning, we prove that both supervised and self-supervised objectives allow empirical risk minimization to learn the desired parameter up to a generalization error, which is independent of input sizes. Particularly, in the self-supervised setting, we identify a condition number that is pivotal to solving downstream tasks.
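The claim in (b), that attention approximates the conditional mean of the value given the key, can be illustrated with a one-dimensional toy example in which softmax attention (with a norm correction on the keys) acts as a Gaussian-kernel estimate of E[value | key]; the construction below is purely illustrative and far simpler than the paper's formal statement.

```python
# Sketch of the viewpoint in the abstract: softmax attention as a kernel (Nadaraya-Watson)
# estimate of E[value | key], which improves as the sequence length grows. Illustrative only.
import torch

def attention_as_conditional_mean(query, keys, values, temp=0.02):
    """One query over (key, value) pairs; the -|k|^2/2 correction turns the softmax
    weights into a Gaussian kernel centred at the query."""
    scores = (keys @ query - 0.5 * (keys ** 2).sum(-1)) / temp
    weights = torch.softmax(scores, dim=0)                 # kernel weights summing to 1
    return weights @ values                                # weighted mean ~ E[value | key=query]

# Toy check: value = f(key) + noise; attending around a query key recovers f(query).
torch.manual_seed(0)
keys = torch.randn(5000, 1)
values = torch.sin(3 * keys).squeeze(-1) + 0.05 * torch.randn(5000)
q = torch.tensor([0.5])
est = attention_as_conditional_mean(q, keys, values)
print(float(est), float(torch.sin(3 * q)))                 # estimate vs. ground truth
```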
In the new era of personalization, learning the heterogeneous treatment effect (HTE) has become an inevitable trend with numerous applications. Yet, most existing HTE estimation methods focus on independently and identically distributed observations and cannot handle the non-stationarity and temporal dependency in the common panel data setting. The treatment evaluators developed for panel data, on the other hand, typically ignore the individualized information. To fill the gap, in this paper we initiate the study of HTE estimation in panel data. Under different assumptions for HTE identifiability, we propose the corresponding heterogeneous one-sided and two-sided synthetic learners, namely H1SL and H2SL, by leveraging state-of-the-art HTE estimators for non-panel data and generalizing the synthetic control method to allow a flexible data generating process. We establish the convergence rates of the proposed estimators. The superior performance of the proposed methods over existing ones is demonstrated by extensive numerical studies.
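The synthetic-control building block that the proposed learners generalize can be sketched in a few lines: non-negative weights over control units are fit on pre-treatment outcomes, and the post-treatment gap to the weighted combination estimates the effect. The NNLS-plus-normalization fit below is a simplification for illustration, not the H1SL/H2SL estimators.

```python
# Toy sketch of the classical synthetic-control step (illustrative, not H1SL/H2SL).
import numpy as np
from scipy.optimize import nnls

def synthetic_control_effect(y_treated, Y_controls, t0):
    """y_treated: (T,), Y_controls: (T, J), t0: first post-treatment period."""
    w, _ = nnls(Y_controls[:t0], y_treated[:t0])          # non-negative weights on controls
    w = w / w.sum() if w.sum() > 0 else np.full(Y_controls.shape[1], 1 / Y_controls.shape[1])
    synthetic = Y_controls @ w                             # counterfactual trajectory
    return (y_treated[t0:] - synthetic[t0:]).mean()        # average post-treatment effect

rng = np.random.default_rng(0)
Y_controls = rng.normal(size=(30, 8)).cumsum(axis=0)       # 8 control units, 30 periods
y_treated = Y_controls @ rng.dirichlet(np.ones(8))         # treated unit mimics a mixture
y_treated[20:] += 2.0                                      # inject a true effect of +2 after t0=20
print(round(synthetic_control_effect(y_treated, Y_controls, t0=20), 2))   # roughly 2.0
```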
Classification using supervised learning requires annotating a large amount of class-balanced data for model training and testing. This has practically limited the scope of applications of supervised learning, in particular deep learning. To address the issues associated with limited and imbalanced data, this paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN), in which a conditional generative adversarial network (CGAN) is trained alongside the classifier and supplements semantics-conditioned, confidence-aware synthesized examples to the annotated data during the training process. In this setting, the CGAN not only serves as a co-supervisor but also provides complementary quality examples to aid the classifier training in an end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier; for the comparison, all classifiers in the above methods adopt the ResNet-18 architecture as the backbone. In particular, for the Street View House Numbers dataset, using 5% of the training data, a test accuracy of 90.26% is achieved by SEC-CGAN as opposed to 88.59% by EC-GAN and 87.17% by the baseline classifier; for the highway image dataset, using 10% of the training data, a test accuracy of 98.27% is achieved by SEC-CGAN, compared to 97.84% by EC-GAN and 95.52% by the baseline classifier.
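The co-supervision loop can be sketched as follows: at each step the conditional generator synthesizes class-conditioned images, and only those the current classifier accepts above a confidence threshold are appended, with their conditioning labels, to the real batch. The function, threshold, and interfaces are assumptions rather than the SEC-CGAN code.

```python
# Schematic sketch of confidence-aware co-supervision (illustrative, not the SEC-CGAN code).
import torch
import torch.nn.functional as F

def co_supervised_step(classifier, generator, x_real, y_real, opt_cls,
                       n_classes=10, z_dim=100, conf_thresh=0.9):
    labels = torch.randint(0, n_classes, (x_real.size(0),))
    with torch.no_grad():
        x_fake = generator(torch.randn(x_real.size(0), z_dim), labels)  # assumed CGAN interface
        conf = F.softmax(classifier(x_fake), dim=1).max(dim=1).values
    keep = conf > conf_thresh                          # confidence-aware filtering
    x = torch.cat([x_real, x_fake[keep]])              # real batch + accepted synthetic examples
    y = torch.cat([y_real, labels[keep]])              # conditioning labels serve as annotations
    loss = F.cross_entropy(classifier(x), y)
    opt_cls.zero_grad(); loss.backward(); opt_cls.step()
    return loss.item(), int(keep.sum())                # kept synthetic examples this step
```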